Problem 1 (2.5)

$P(T^+ \cap D) = P(T^+|D)P(D) = \frac{98} {100} \times \frac{1} {100,000} = \frac {98} {10,000,000} $
$P(T^+ \cap D^C) = P(T^+|D^C)P(D^C) = \frac{1} {100} \times \frac{99,999} {100,000} = \frac {99,999} {10,000,000} $
$P(T^+) = P(T^+ \cap D) + P(T^+ \cap D^C) = \frac {98} {10,000,000} + \frac {99,999} {10,000,000} = \frac {100,097}{10,000,000}$
$P(D|T^+) = \frac{P(T^+|D)P(D)} {P(T^+)} = \frac{P(T^+ \cap D)} {P(T^+)} = \frac {98} {10,000,000} / \frac {100,097}{10,000,000} = \frac {98} {100,097} \approx 0.00098$
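As a sanity check, the arithmetic above can be reproduced exactly with Python's `fractions` module (a quick verification, not part of the derivation):

```python
from fractions import Fraction

# Exact Bayes' theorem check with the problem's numbers.
p_d = Fraction(1, 100_000)          # prevalence P(D)
p_pos_given_d = Fraction(98, 100)   # sensitivity P(T+ | D)
p_pos_given_dc = Fraction(1, 100)   # false-positive rate P(T+ | D^C)

# Law of total probability for P(T+)
p_pos = p_pos_given_d * p_d + p_pos_given_dc * (1 - p_d)

# Posterior P(D | T+)
p_d_given_pos = p_pos_given_d * p_d / p_pos
print(p_d_given_pos, float(p_d_given_pos))  # 98/100097, ~0.00098
```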

Problem 2

To minimize cost, we select a camera with a 512 x 512 resolution. With an 80 mm lens, the defect size and its corresponding pixel footprint are calculated:

$Total \space Pixel \ Size = 8 \space \mu m + 2 \space \mu m = 10 \space \mu m$
$\frac {Defect \space Size} {Image \space Size} = \frac {Defect \space Image \space Size} {Sensor \space Size}=\frac {0.8 \space mm} {80 \space mm} = \frac {Defect \space Image \space Size} {512 \times 10 \space \mu m} $
$Defect \space Image \space Size = 51.2 \space \mu m $
$Defect \space Image \space Size = {51.2 \space \mu m} \times \frac {1 \space px} {10 \space \mu m} = 5.12 \space px $
$Defect \space Pixel \space Size = \frac {8 \space \mu m} {10 \space \mu m} \times 5.12 \space px = 4.096 \space px$

Since we only need 2 pixels for a defect to be detectable, this resolution and lens pairing is acceptable. We next calculate the necessary distance from the sample to the camera lens.

$ \frac {25 \space mm} {512 \space pixels \times 10 \space \mu m} = \frac {Camera \space Distance}{80 \space mm}$
$ Camera \space Distance = 390.6 \space mm $

This seems like a reasonable distance. Therefore, an 80 mm lens needs to be paired with a 512 x 512 camera at a distance of 390.6 mm from the sample to automate the process.
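The pairing can be sanity-checked numerically. A minimal sketch, assuming the thin-lens similar-triangle model and the 10 µm pixel pitch (8 µm active) used above:

```python
# Similar-triangle check of the 512 x 512 camera / 80 mm lens pairing.
focal_length_mm = 80.0
pixel_pitch_um = 10.0       # 8 um active element + 2 um gap
sensor_px = 512
defect_mm = 0.8
viewing_area_mm = 25.0

sensor_size_um = sensor_px * pixel_pitch_um                     # 5120 um
defect_image_um = defect_mm / focal_length_mm * sensor_size_um  # 51.2 um
defect_px = defect_image_um / pixel_pitch_um                    # 5.12 px
defect_active_px = (8.0 / pixel_pitch_um) * defect_px           # 4.096 px

# Distance needed for the 25 mm viewing area to fill the sensor.
camera_distance_mm = viewing_area_mm / (sensor_size_um / 1000) * focal_length_mm
print(defect_active_px, camera_distance_mm)  # 4.096, 390.625
```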


Problem 2 (2.7a)

$2048 \space lines \times \frac{1 \space line \space pair}{2 \space lines} \times \frac {1}{5 \space cm} \times \frac {1 \space cm}{10 \space mm} = 20.48 \space line \space pairs/mm$
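The same unit conversion, expressed in code:

```python
# Resolution of a 2048-line camera imaging a 5 cm wide scene.
lines = 2048
scene_width_mm = 5 * 10                    # 5 cm in mm
lp_per_mm = (lines / 2) / scene_width_mm   # 2 lines per line pair
print(lp_per_mm)  # 20.48
```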

Problem 2 (2.7b)

$Pixel \space Diagonal = \sqrt{2048^2 + 2048^2} \space px = 2896.3094... \space px$
$Image \space Diagonal = \sqrt{2^2 + 2^2} \space in = 2.8284... \space in$
$\frac{2896.3094... \space px}{2.8284... \space in} = 1024 \space dpi$
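A quick check of the diagonal dpi calculation:

```python
import math

# Diagonal resolution: pixels along the diagonal over inches along it.
px_diag = math.hypot(2048, 2048)   # pixel diagonal of a 2048 x 2048 image
in_diag = math.hypot(2, 2)         # diagonal of a 2 in x 2 in print
dpi = px_diag / in_diag
print(dpi)  # 1024.0
```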

Problem 2 (2.8)

$ \frac {7 \space mm}{35 \space mm} = \frac {Image \space Size}{500 \space mm} $
$ Image \space Size = 100 \space mm$
$1024 \space lines \times \frac{1 \space line \space pair}{2 \space lines} \times \frac{1}{100 \space mm} = 5.12 \space line \space pairs/mm $


Project 1

I first attempted simple filters for denoising: median and Gaussian filtering. These are implemented below:
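The original notebook cell is not preserved in this export; a minimal sketch of the step using `scipy.ndimage`, with a synthetic stand-in for the project image:

```python
import numpy as np
from scipy import ndimage

# Synthetic noisy stand-in for the project image (the real notebook
# loads the image and converts it with skimage's rgb2gray).
rng = np.random.default_rng(0)
image = np.clip(rng.normal(0.5, 0.2, (128, 128)), 0, 1)

# Median filter: removes impulse noise while preserving edges.
median = ndimage.median_filter(image, size=3)

# Gaussian filter: smooths everything, edges included.
gaussian = ndimage.gaussian_filter(image, sigma=1)

# The combination described in the text: a light Gaussian after the median.
combined = ndimage.gaussian_filter(median, sigma=0.5)
```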


The median produced slightly sharper results than the Gaussian, and the last image, a slight Gaussian filter applied after the median filter, is the best of this set in my opinion.

From here, I converted the image to Fourier space to see if there were any patterns. I observed a grid-like pattern and produced a grid mask to cancel these values out. The mask is implemented below:
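The masking cell itself is not preserved; a sketch of frequency-domain grid masking, with assumed grid-noise frequencies and a synthetic stand-in image:

```python
import numpy as np

# Stand-in for the denoising target.
image = np.random.default_rng(1).random((256, 256))

# Centered 2D spectrum; log(1 + |F|) is the usual way to display it
# without taking the log of zero.
F = np.fft.fftshift(np.fft.fft2(image))
spectrum = np.log1p(np.abs(F))

# Mask that zeroes narrow horizontal/vertical bands at hypothetical
# grid-noise frequencies, sparing the low-frequency center.
mask = np.ones(F.shape)
cy, cx = F.shape[0] // 2, F.shape[1] // 2
for offset in range(32, cy, 32):   # assumed grid spacing in frequency space
    mask[cy + offset - 1:cy + offset + 2, :] = 0
    mask[cy - offset - 1:cy - offset + 2, :] = 0
    mask[:, cx + offset - 1:cx + offset + 2] = 0
    mask[:, cx - offset - 1:cx - offset + 2] = 0

# Apply the mask and invert the transform.
filtered = np.fft.ifft2(np.fft.ifftshift(F * mask)).real
```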


In the top row, we can see that this produces dramatically better results than the naive median and Gaussian filter implementations. The last column has a minor Gaussian filter applied to dampen some residual patterning. In the second row, I applied a Gaussian filter to the mask itself to soften the cutoffs, which yielded some minor cosmetic improvements.

Below is the bottom-right image from the above section with Gaussian unsharp masking and contrast stretching applied.
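A sketch of that final step; the sharpening amount and percentile cutoffs here are assumed values, not the ones from the original notebook:

```python
import numpy as np
from scipy import ndimage

# Stand-in for the denoised image from the previous step.
image = np.random.default_rng(2).random((128, 128))

# Unsharp masking: add back a scaled high-frequency residual.
blurred = ndimage.gaussian_filter(image, sigma=2)
sharpened = image + 1.0 * (image - blurred)

# Contrast stretching: rescale the 2nd-98th percentile range to [0, 1].
lo, hi = np.percentile(sharpened, (2, 98))
stretched = np.clip((sharpened - lo) / (hi - lo), 0, 1)
```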

Project 2 (2.7)

Project 3 (2.9)

(2.9a)

(2.9b)

(2.9c)

From the histogram, it's clear the image is heavily biased towards dark values. This makes sense because the background is black. At the high end of the spectrum, there are many pixels of brightness 255; these are likely saturating the sensor.

Project 4 (2.10)

(2.10a)

The code to calculate central moments (centralMoments4e) is implemented below:
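The original code cell is not preserved in this export; a plausible sketch of what `centralMoments4e` computes, taking the n-th central moment directly over the pixel values:

```python
import numpy as np

def centralMoments4e(image, order):
    """Return the order-th central moment of an image's intensity values.

    Sketch reconstruction, not the original cell. order=1 returns the
    mean rather than the (trivially zero) first central moment, matching
    the tables below.
    """
    values = np.asarray(image, dtype=float).ravel()
    mean = values.mean()
    if order == 1:
        return mean
    return np.mean((values - mean) ** order)
```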

(2.10b)

| Moment | Value |
| --- | ---: |
| Mean | 46 |
| Variance | 4369 |
| Third Central Moment | 503521 |
| Fourth Central Moment | 89457514 |

(2.10c)

| Moment | Value |
| --- | ---: |
| Mean | 143 |
| Variance | 2301 |
| Third Central Moment | 20723 |
| Fourth Central Moment | 15801695 |

(2.10d)

(2.10e)

The values calculated in (b) and (c) are computed from the image histograms. The mean is the average intensity value. The second central moment (variance) measures the spread of the histogram about that mean. The third central moment measures the skewness (asymmetry) of the histogram, and the fourth measures its relative flatness.